brute-force search


Task-Difficulty-Aware Efficient Object Arrangement Leveraging Tossing Motions

Kiyokawa, Takuya, Muta, Mahiro, Wan, Weiwei, Harada, Kensuke

arXiv.org Artificial Intelligence

This study explores pick-and-toss (PT) as an alternative to pick-and-place (PP), allowing a robot to extend its reachable range and improve task efficiency. Although PT boosts efficiency in object arrangement, the placement environment critically affects the success of tossing. To achieve accurate and efficient object arrangement, we propose choosing between PP and PT based on a task difficulty estimated from the placement environment. Our method simultaneously learns the tossing motion through self-supervised learning and the task-determination policy via brute-force search. Experimental results validate the proposed method through simulations and real-world tests on various arrangements of rectangular objects.
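The task-determination search described in the abstract can be illustrated with a toy sketch (not the authors' implementation): it enumerates every assignment of PP or PT to a set of placement cells and keeps the one minimizing expected time. The timings and the `toss_success` failure model are illustrative assumptions, not values from the paper.

```python
from itertools import product

def best_policy(difficulties, toss_success, toss_time=1.0, place_time=3.0):
    """Toy brute-force task-determination search: for each placement cell,
    choose pick-and-place (PP) or pick-and-toss (PT) to minimize expected
    time, where a failed toss is assumed to fall back on a slower PP.
    All timings and the failure model are illustrative, not from the paper."""
    def expected_time(choice):
        total = 0.0
        for d, c in zip(difficulties, choice):
            if c == "PP":
                total += place_time
            else:  # PT: fast toss, plus a PP retry on failure
                p = toss_success(d)
                total += toss_time + (1.0 - p) * place_time
        return total
    # exhaustively score all 2^n PP/PT assignments
    return min(product(("PP", "PT"), repeat=len(difficulties)),
               key=expected_time)

# easy cell (difficulty 0.1) -> toss; hard cell (0.9) -> place
print(best_policy([0.1, 0.9], lambda d: 1.0 - d))  # ('PT', 'PP')
```

For a handful of placement cells the exhaustive enumeration is cheap, which is what makes a brute-force policy search plausible at this scale.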


Approximate Nearest Neighbour Search on Dynamic Datasets: An Investigation

Harwood, Ben, Dezfouli, Amir, Chades, Iadine, Sanderson, Conrad

arXiv.org Artificial Intelligence

Approximate k-Nearest Neighbour (ANN) methods are often used for mining information and aiding machine learning on large-scale, high-dimensional datasets. ANN methods typically differ in the index structure used to accelerate searches, resulting in various recall/runtime trade-off points. For applications with static datasets, runtime constraints and dataset properties can be used to empirically select an ANN method with suitable operating characteristics. However, for applications with dynamic datasets, which are subject to frequent online changes (such as the addition of new samples), there is currently no consensus on which ANN methods are most suitable. Traditional evaluation approaches do not consider the computational cost of updating the index structure, nor the rate and size of index updates. To address this, we empirically evaluate five popular ANN methods on two main applications (online data collection and online feature learning) while taking these costs into account. Two dynamic datasets are used, derived from the SIFT1M dataset with 1 million samples and the DEEP1B dataset with 1 billion samples. The results indicate that the often-used k-d trees method is not suitable for dynamic datasets, as it is slower than a straightforward exhaustive-search baseline. For online data collection, the Hierarchical Navigable Small World Graphs method achieves a consistent speedup over the baseline across a wide range of recall rates. For online feature learning, the Scalable Nearest Neighbours method is faster than the baseline for recall rates below 75%.
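The exhaustive-search baseline that the evaluation compares against can be sketched in a few lines (a generic illustration, not the paper's benchmark code): compute every query-to-index distance and keep the k smallest, which is exact but scales linearly with the index size.

```python
import numpy as np

def exhaustive_knn(index, queries, k):
    """Exact k-NN by brute force: compute all pairwise squared Euclidean
    distances via broadcasting and take the k smallest per query."""
    # (n_queries, n_index) matrix of squared distances
    d2 = ((queries[:, None, :] - index[None, :, :]) ** 2).sum(-1)
    # indices of the k nearest index points for each query
    return np.argsort(d2, axis=1)[:, :k]

rng = np.random.default_rng(0)
index = rng.standard_normal((1000, 32))   # toy stand-in for a dataset
queries = rng.standard_normal((5, 32))
nn = exhaustive_knn(index, queries, k=10)
print(nn.shape)  # (5, 10)
```

A key property for dynamic datasets is that this baseline has zero index-update cost: adding a sample is just appending a row, which is why a slow-to-update index structure can lose to it.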


Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations

Geiger, Atticus, Wu, Zhengxuan, Potts, Christopher, Icard, Thomas, Goodman, Noah D.

arXiv.org Artificial Intelligence

Causal abstraction is a promising theoretical framework for explainable artificial intelligence that defines when an interpretable high-level causal model is a faithful simplification of a low-level deep learning system. However, existing causal abstraction methods have two major limitations: they require a brute-force search over alignments between the high-level model and the low-level one, and they presuppose that variables in the high-level model will align with disjoint sets of neurons in the low-level one. In this paper, we present distributed alignment search (DAS), which overcomes these limitations. In DAS, we find the alignment between high-level and low-level models using gradient descent rather than conducting a brute-force search, and we allow individual neurons to play multiple distinct roles by analyzing representations in non-standard bases, i.e., distributed representations. Our experiments show that DAS can discover internal structure that prior approaches miss. Overall, DAS removes previous obstacles to conducting causal abstraction analyses and allows us to find conceptual structure in trained neural networks.
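The combinatorial blow-up of the brute-force alignment search that DAS replaces can be made concrete with a toy enumerator (an illustration of the search space only, not the authors' code): it lists every assignment of high-level variables to disjoint sets of neurons, which is exactly the space earlier methods had to search exhaustively.

```python
from itertools import combinations

def disjoint_alignments(n_neurons, sizes):
    """Enumerate all brute-force alignments: assign each high-level
    variable (one entry of `sizes`) a disjoint set of neuron indices."""
    def rec(avail, sizes):
        if not sizes:
            yield []
            return
        for group in combinations(sorted(avail), sizes[0]):
            for rest in rec(avail - set(group), sizes[1:]):
                yield [group] + rest
    return list(rec(set(range(n_neurons)), sizes))

# two variables over 4 neurons, one neuron each: 4 * 3 = 12 alignments
print(len(disjoint_alignments(4, [1, 1])))  # 12
```

Even this tiny case grows multiplicatively with each variable; for realistic hidden-layer widths the enumeration is intractable, which motivates replacing it with gradient descent as in DAS.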


Sturtevant

AAAI Conferences

Search is a recognized technique for procedural content generation and game design, and it has been used successfully in both commercial and academic games. In this context, search has almost always referred to selective search, as opposed to larger brute-force searches. The argument against brute-force search is that the state spaces of most games are too large to be amenable to exhaustive search. We believe, however, that brute-force search should not be dismissed too quickly. State spaces with trillions or tens of trillions of states can now be exhaustively searched with relative ease, and continued growth in parallelism and computational power is expected to extend this trend. We believe that this, combined with appropriate abstraction, will allow exhaustive search to be applied to many problems once thought prohibitively large. We explore this argument in the context of "Fling!," a puzzle game available for mobile devices, presenting a system for interactively designing and analyzing its puzzles.
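The core idea, exhaustively enumerating a game's state space by breadth-first search, is easy to demonstrate at small scale. The sketch below (illustrative and unrelated to Fling! itself) enumerates the 2x2 sliding-tile puzzle, whose 12 reachable states form a single cycle; the same loop, run on a cluster, is what makes trillion-state searches feasible.

```python
from collections import deque

def neighbours(state):
    """2x2 sliding-tile states as tuples of 4 entries (0 = blank)."""
    i = state.index(0)
    r, c = divmod(i, 2)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 2 and 0 <= nc < 2:
            j = nr * 2 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]   # slide a tile into the blank
            yield tuple(s)

def exhaustive_bfs(start):
    """Breadth-first search that enumerates every reachable state,
    recording each state's distance from the start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for t in neighbours(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

dist = exhaustive_bfs((0, 1, 2, 3))
print(len(dist), max(dist.values()))  # 12 reachable states, farthest at depth 6
```

The resulting distance table answers design questions directly, e.g. how many positions exist and how hard the hardest one is, which is the kind of analysis the paper applies to puzzles.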


A New Sentence Ordering Method Using BERT Pretrained Model

Golestani, Melika, Razavi, Seyedeh Zahra, Faili, Heshaam

arXiv.org Artificial Intelligence

Building systems capable of natural language understanding (NLU) is one of the oldest goals of AI. An essential component of NLU is detecting the logical succession of events in a text. The task of sentence ordering was proposed to learn this succession of events, with applications in many AI tasks. Previous statistical methods perform poorly, while neural-network-based approaches require large corpora for training. In this paper, we propose a sentence-ordering method that needs no training phase and consequently no large corpus for learning. To this end, we generate sentence embeddings using the pre-trained BERT model and measure sentence similarity using the cosine similarity score, which we take as an indicator of how coherently one sentence follows another. We then sort the sentences via brute-force search to maximize the overall similarity of adjacent sentences. Our proposed method outperforms other baselines on ROCStories, a corpus of five-sentence human-written stories, and is particularly useful when no large corpus is available. Further advantages of the method are its interpretability and the fact that it requires no linguistic knowledge.
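The final sorting step, a brute-force search over orderings that maximizes adjacent cosine similarity, is feasible for five-sentence stories (5! = 120 permutations). A minimal sketch, with toy 2-D vectors standing in for BERT sentence embeddings:

```python
import numpy as np
from itertools import permutations

def order_sentences(embeddings):
    """Brute-force sentence ordering: score every permutation by the sum
    of cosine similarities between adjacent sentences and keep the best.
    Feasible only for short texts, since there are n! permutations."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    def coherence(order):
        return sum(float(e[a] @ e[b]) for a, b in zip(order, order[1:]))
    return max(permutations(range(len(e))), key=coherence)

# toy "embeddings" on an arc; the most coherent chain is angle-monotone
ang = np.deg2rad([20.0, 0.0, 30.0, 10.0])
emb = np.stack([np.cos(ang), np.sin(ang)], axis=1)
print(order_sentences(emb))  # a monotone chain such as (1, 3, 0, 2)
```

Note that a chain and its reversal score identically, so the method recovers the ordering only up to direction; any tie-breaking (e.g. which sentence starts the story) needs extra signal.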


Limitations of Front-To-End Bidirectional Heuristic Search

Barker, Joseph K. (University of California, Los Angeles) | Korf, Richard E. (University of California, Los Angeles)

AAAI Conferences

We present an intuitive explanation for the limited effectiveness of front-to-end bidirectional heuristic search, supported by extensive evidence from many commonly studied domains. While previous work has proved the limitations of specific algorithms, we show that any front-to-end bidirectional heuristic search algorithm is likely to be dominated by unidirectional heuristic search or bidirectional brute-force search. We also demonstrate a pathological case in which bidirectional heuristic search is the dominant algorithm, so a stronger claim cannot be made. Finally, we show that on the four-peg Towers of Hanoi with arbitrary start and goal states, bidirectional brute-force search outperforms unidirectional heuristic search using pattern-database heuristics.
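The bidirectional brute-force search used as a yardstick above can be sketched generically (an illustration, not the authors' implementation): two breadth-first frontiers grow from start and goal, always expanding the smaller one layer by layer, and the minimum meeting cost within an expanded layer gives a shortest path under unit edge costs.

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbours):
    """Bidirectional brute-force (breadth-first) search with unit edge
    costs: expand the smaller frontier one full layer at a time; when an
    expanded layer touches the other side, the best meeting point found
    in that layer yields a shortest path length."""
    if start == goal:
        return 0
    dist = [{start: 0}, {goal: 0}]
    frontier = [deque([start]), deque([goal])]
    while frontier[0] and frontier[1]:
        side = 0 if len(frontier[0]) <= len(frontier[1]) else 1
        best = None
        for _ in range(len(frontier[side])):   # process one full layer
            s = frontier[side].popleft()
            for t in neighbours(s):
                if t in dist[1 - side]:        # frontiers meet
                    cand = dist[side][s] + 1 + dist[1 - side][t]
                    best = cand if best is None else min(best, cand)
                elif t not in dist[side]:
                    dist[side][t] = dist[side][s] + 1
                    frontier[side].append(t)
        if best is not None:
            return best
    return None                                # no path exists

# shortest path on the integer line: 0 -> 10 in 10 steps
print(bidirectional_bfs(0, 10, lambda n: (n - 1, n + 1)))  # 10
```

Each direction only needs to reach roughly half the solution depth, so the two frontiers together can be exponentially smaller than one unidirectional frontier, which is the effect the paper weighs against heuristic guidance.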